Neuroevolution of augmenting topologies

NeuroEvolution of Augmenting Topologies (NEAT) is a genetic algorithm for generating evolving artificial neural networks (a neuroevolution technique) developed by Ken Stanley in 2002 while at The University of Texas at Austin. It alters both the weighting parameters and structures of networks, attempting to find a balance between the fitness of evolved solutions and their diversity. It is based on applying three key techniques: tracking genes with history markers to allow crossover among topologies, applying speciation (the evolution of species) to preserve innovations, and developing topologies incrementally from simple initial structures ("complexifying").

== Performance ==

On simple control tasks, the NEAT algorithm often arrives at effective networks more quickly than other contemporary neuro-evolutionary techniques and reinforcement learning methods.〔Kenneth O. Stanley and Risto Miikkulainen (2002). "Evolving Neural Networks Through Augmenting Topologies". Evolutionary Computation 10 (2): 99–127.〕〔Matthew E. Taylor, Shimon Whiteson, and Peter Stone (2006). "Comparing Evolutionary and Temporal Difference Methods in a Reinforcement Learning Domain". GECCO 2006: Proceedings of the Genetic and Evolutionary Computation Conference.〕
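The sketch below is a minimal, illustrative rendering of the three techniques named above, not Stanley and Miikkulainen's reference implementation: connection genes carry global innovation numbers (history markers) so that crossover between different topologies can line up matching genes, and a simplified compatibility distance groups genomes into species. All function names, coefficient values, and the decision to fold "excess" genes into the disjoint count are assumptions made for brevity.

```python
# Minimal NEAT-style bookkeeping sketch (illustrative only).
from dataclasses import dataclass, field
from itertools import count
import random

_innovation_counter = count(1)   # global source of history markers
_innovation_db = {}              # (in_node, out_node) -> innovation number


def innovation_number(in_node, out_node):
    """Reuse the marker if this structural innovation has been seen before."""
    key = (in_node, out_node)
    if key not in _innovation_db:
        _innovation_db[key] = next(_innovation_counter)
    return _innovation_db[key]


@dataclass
class ConnectionGene:
    in_node: int
    out_node: int
    weight: float
    enabled: bool = True
    innovation: int = 0


@dataclass
class Genome:
    connections: dict = field(default_factory=dict)  # innovation -> ConnectionGene

    def add_connection(self, in_node, out_node, weight):
        innov = innovation_number(in_node, out_node)
        self.connections[innov] = ConnectionGene(in_node, out_node, weight, True, innov)


def compatibility(g1, g2, c_disjoint=1.0, c_weight=0.4):
    """Simplified compatibility distance used to assign genomes to species.
    The published formula also separates excess genes; this sketch folds
    excess into the disjoint count for brevity."""
    keys1, keys2 = set(g1.connections), set(g2.connections)
    matching = keys1 & keys2
    disjoint = len(keys1 ^ keys2)
    if matching:
        w_diff = sum(abs(g1.connections[k].weight - g2.connections[k].weight)
                     for k in matching) / len(matching)
    else:
        w_diff = 0.0
    n = max(len(keys1), len(keys2), 1)
    return c_disjoint * disjoint / n + c_weight * w_diff


if __name__ == "__main__":
    random.seed(0)
    # Complexification: both genomes start from the same minimal topology
    # (inputs 0, 1 feeding output 2); only one gains a hidden node (3).
    a, b = Genome(), Genome()
    for g in (a, b):
        g.add_connection(0, 2, random.uniform(-1, 1))
        g.add_connection(1, 2, random.uniform(-1, 1))
    b.add_connection(1, 3, 0.5)
    b.add_connection(3, 2, 0.5)
    print("compatibility distance:", round(compatibility(a, b), 3))
```

Because both genomes obtain the same innovation numbers for the shared connections, a crossover operator can align them gene-by-gene, and the compatibility distance rises only with the structural and weight differences that accumulate afterwards.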